
    Layer-Dependent Attentional Processing by Top-down Signals in a Visual Cortical Microcircuit Model

    A vast amount of information about the external world continuously flows into the brain, whereas the brain's capacity to process this information is limited. Attention enables the brain to allocate its information-processing resources to selected sensory inputs, reducing its computational load, and the effects of attention have been studied extensively in visual information processing. However, how the microcircuit of the visual cortex processes attentional information from higher areas remains largely unknown. Here, we explore the complex interactions between visual inputs and an attentional signal in a computational model of the visual cortical microcircuit. Our model not only accounts for previous experimental observations of attentional effects on visual neuronal responses, but also predicts contrasting attentional effects of top-down signals across cortical layers: attention to a preferred stimulus of a column enhances neuronal responses in layers 2/3 and 5, the output stations of cortical microcircuits, whereas attention suppresses neuronal responses in layer 4, the input station of cortical microcircuits. We demonstrate that this specific modulation pattern of layer-4 activity, which emerges from inter-laminar synaptic connections, is crucial for a rapid shift of attention to a currently unattended stimulus. Our results suggest that top-down signals act differently on different layers of the cortical microcircuit.
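
    The layer-dependent effect described above can be illustrated with a toy firing-rate model. The sketch below is a minimal illustration, not the paper's spiking model: three laminar populations (L2/3, L4, L5), a hypothetical inter-laminar weight matrix W, and a top-down attentional input to L2/3 whose assumed di-synaptic inhibitory route to L4 reproduces the predicted pattern of enhanced L2/3 and L5 responses alongside suppressed L4 responses.

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        # Hypothetical inter-laminar effective weights (rows: target, cols: source).
        # The negative L4-from-L2/3 entry stands in for the assumed di-synaptic
        # inhibition through which attention could suppress the input layer.
        #              L2/3   L4    L5   (source)
        W = np.array([[ 0.0,  0.8,  0.0],   # -> L2/3: driven by L4
                      [-0.6,  0.0,  0.0],   # -> L4:   suppressed via L2/3
                      [ 0.9,  0.0,  0.0]])  # -> L5:   driven by L2/3

        def steady_rates(bottom_up, attention, tau=10.0, dt=0.1, steps=5000):
            """Integrate tau * dr/dt = -r + relu(W r + I) to steady state."""
            I = np.array([attention, bottom_up, 0.0])  # top-down to L2/3, bottom-up to L4
            r = np.zeros(3)
            for _ in range(steps):
                r += (dt / tau) * (-r + relu(W @ r + I))
            return r

        for att in (0.0, 0.5):
            r23, r4, r5 = steady_rates(bottom_up=1.0, attention=att)
            print(f"attention={att}: L2/3={r23:.2f}  L4={r4:.2f}  L5={r5:.2f}")

    With attention switched on, the L2/3 and L5 rates rise while the L4 rate falls, mirroring the layer-dependent modulation the abstract predicts.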

    Is a 4-bit synaptic weight resolution enough? - Constraints on enabling spike-timing dependent plasticity in neuromorphic hardware

    Large-scale neuromorphic hardware systems typically face a trade-off between model detail and required chip resources. Especially when implementing spike-timing-dependent plasticity, reducing resources introduces limitations compared to floating-point precision. A natural resource-saving modification is to reduce the synaptic weight resolution. In this study, we estimate the impact of synaptic weight discretization at different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
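
    The core operation under study, mapping continuous weights onto a small number of hardware levels, can be sketched in a few lines. The snippet below assumes a hypothetical uniform quantizer over a range [0, w_max]; the actual FACETS synapse circuits are more involved, so this only illustrates how discretization error shrinks with bit resolution.

        import numpy as np

        def discretize(weights, bits=4, w_max=1.0):
            """Round weights onto the nearest of 2**bits evenly spaced levels."""
            levels = 2**bits - 1                  # e.g. 15 steps for 4 bits
            step = w_max / levels
            return np.clip(np.round(weights / step), 0, levels) * step

        rng = np.random.default_rng(42)
        w = rng.uniform(0.0, 1.0, size=10_000)    # stand-in for learned weights

        for bits in (2, 4, 8):
            err = np.abs(discretize(w, bits) - w)
            print(f"{bits}-bit: mean |error| = {err.mean():.4f} "
                  f"(step = {1.0 / (2**bits - 1):.4f})")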

    Layer dependent neural modulation of a realistic layered-microcircuit model in visual cortex based on bottom-up and top-down signals

    Visual attention allocates information-processing resources to selected sensory inputs to reduce the processing load. Here, we carried out a large-scale simulation of a layered microcircuit model of the visual cortex, based on current knowledge of cortical neurobiology, to explore the complex interaction between bottom-up visual input and a top-down attentional signal. The visual microcircuit model consisted of about 40,000 integrate-and-fire model neurons representing layers 2/3, 4, 5 and 6 (Potjans et al., 2009). Two such microcircuits interacted by lateral inhibition through layer 2/3. Top-down attention was applied to layer 2/3 to facilitate the processing of visual stimuli. To investigate the mechanisms of visual processing and attentional effects in the layered microcircuit model, we simulated models with various forms of lateral inhibition and compared the results to physiological findings on neural and attentional modulation (Reynolds et al., 1999). Specifically, a model with lateral inhibition from presynaptic excitatory to postsynaptic inhibitory neurons exhibited layer-dependent modulation upon presentation of the anti-preferred stimulus, which suppressed the response to the preferred stimulus in output layers such as layers 2/3 and 5. Attention, however, recovered the response to the preferred stimulus in these layers. These stimulus- and attention-driven modulations agreed with the physiological observations. The essential ingredients of this canonical cortical circuit were the detailed laminar structure of a column, a top-down attentional bias applied to layer 2/3, and lateral inhibition from presynaptic excitatory to postsynaptic inhibitory neurons through layer 2/3. Our model allowed us to investigate how bottom-up visual input and the top-down attentional signal may interact within the laminar structure of a column. These results provide important predictions about the role of attention in visual processing.
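
    The biased-competition behavior described here, suppression by an anti-preferred stimulus and recovery under attention, can be mimicked with a two-column toy model. In the sketch below, each column's L2/3 excitatory unit drives the other column's inhibitory unit (the lateral-inhibition motif named above); all rates, gains, and parameter values are illustrative assumptions, not the paper's fitted parameters.

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        def simulate(drive, attention, w_eh=1.2, w_he=0.5,
                     tau=10.0, dt=0.1, steps=5000):
            """Each column's excitatory unit drives the OTHER column's
            inhibitory unit, which suppresses its local excitatory unit."""
            e = np.zeros(2)  # L2/3 excitatory rates, one per column
            h = np.zeros(2)  # L2/3 inhibitory rates, one per column
            for _ in range(steps):
                de = -e + relu(drive + attention - w_he * h)
                dh = -h + relu(w_eh * e[::-1])  # lateral input from the other column
                e += (dt / tau) * de
                h += (dt / tau) * dh
            return e

        both = np.array([1.0, 1.0])  # preferred + anti-preferred stimulus pair
        print("no attention:", np.round(simulate(both, np.array([0.0, 0.0])), 2))
        print("attend col 0:", np.round(simulate(both, np.array([0.4, 0.0])), 2))

    Presenting both stimuli pulls each column's response below what its stimulus alone would evoke, and biasing column 0 with attention restores its response at the expense of the competitor.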

    Meeting the Memory Challenges of Brain-Scale Network Simulation

    The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and to constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, both in numbers of neurons and synapses and in computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we show that as network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of the key components contributing to memory saturation and prediction of the effects of potential code improvements before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales and develop general strategies for reducing memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
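
    The style of analysis described, a linear model of per-core memory as a function of network size and core count, can be sketched as follows. All byte costs and overhead terms below are invented placeholders rather than NEST's measured values; the sketch only shows why any per-node cost that scales with the total network size, rather than with the local share, eventually saturates memory no matter how many cores are added.

        # Assumed per-object costs (placeholders, not NEST's actual numbers).
        def memory_per_core(n_neurons, n_syn_per_neuron, n_cores,
                            bytes_fixed=1.25e8,    # assumed base footprint per core
                            bytes_neuron=1_000,    # assumed cost of a local neuron
                            bytes_syn=48,          # assumed cost of a local synapse
                            bytes_node=8):         # per-node cost that sparse data
                                                   # structures aim to eliminate
            """Estimate bytes per core for a network spread over n_cores."""
            local_neurons = n_neurons / n_cores              # neurons distributed
            local_synapses = local_neurons * n_syn_per_neuron  # synapses at targets
            return (bytes_fixed
                    + n_neurons * bytes_node       # serial term: grows with the
                                                   # TOTAL network, not the share
                    + local_neurons * bytes_neuron
                    + local_synapses * bytes_syn)

        # Example: 1e8 neurons, 1e4 synapses each, on 10k vs 100k cores.
        for cores in (10_000, 100_000):
            gb = memory_per_core(1e8, 1e4, cores) / 1e9
            print(f"{cores:>7} cores: ~{gb:.1f} GB per core")

    In this toy setting the distributed neuron and synapse terms shrink with the core count while the serial per-node term does not, which is exactly the kind of component such a model flags for replacement by sparse data structures.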

    Run-Time Interoperability Between Neuronal Network Simulators Based on the MUSIC Framework

    MUSIC is a standard API that allows large-scale neuronal network simulators to exchange data at runtime within a parallel computer. A pilot implementation of this API has been released as open source. We report experiences from implementing MUSIC interfaces for two neuronal network simulators of different kinds, NEST and MOOSE. A multi-simulation of a cortico-striatal network model involving both simulators is performed, demonstrating how MUSIC can promote interoperability between models written for different simulators and how these can be reused to build a larger model system. Benchmarks show that the MUSIC pilot implementation provides efficient data transfer in a cluster computer with good scaling. We conclude that MUSIC fulfills its design goal of making it simple to adapt existing simulators to use MUSIC. In addition, since the MUSIC API enforces independence of the applications, the multi-simulation could be built from pluggable component modules without adapting the components to each other in terms of simulation time step or topology of connections between the modules.
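
    The port-based decoupling that MUSIC provides can be conveyed with a conceptual mock. The Python sketch below uses entirely hypothetical names (Port, run_coupled); the real MUSIC API is a C++/MPI library with a different interface. The point is only the design idea: two simulators advance independently and exchange data through named ports at agreed communication intervals, so neither needs to know the other's internals or time step.

        class Port:
            """A named buffer standing in for a MUSIC event/continuous port."""
            def __init__(self, name):
                self.name, self.buffer = name, []

            def send(self, t, data):
                self.buffer.append((t, data))

            def receive(self):
                out, self.buffer = self.buffer, []
                return out

        def run_coupled(producer, consumer, port, t_end, dt_comm):
            """Advance both 'simulators' in communication intervals of dt_comm."""
            t = 0.0
            while t < t_end:
                producer(t, dt_comm, port)            # writes events to the port
                consumer(t, dt_comm, port.receive())  # reads them back out
                t += dt_comm

        # Toy stand-ins: A emits one event per interval, B just logs arrivals.
        def sim_a(t, dt, port):
            port.send(t, f"spike@{t:.1f}")

        def sim_b(t, dt, events):
            print(f"[B] t={t:.1f} received {[d for _, d in events]}")

        run_coupled(sim_a, sim_b, Port("cortex_out"), t_end=0.4, dt_comm=0.1)

    Because the two callables only meet at the port, either side could be swapped for a different simulator without touching the other, which is the interoperability property the abstract attributes to MUSIC.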